7 research outputs found

    Blind source separation using statistical nonnegative matrix factorization

    Get PDF
    PhD Thesis. Blind Source Separation (BSS) attempts to automatically extract and track a signal of interest in real-world scenarios where other signals are present. BSS addresses the problem of recovering the original signals from an observed mixture without relying on training knowledge. This research studied three novel approaches to the BSS problem based on extensions of the non-negative matrix factorization model and sparsity regularization methods. 1) A framework combining pruning with Bayesian-regularized cluster nonnegative tensor factorization under the Itakura-Saito divergence for separating sources mixed in a stereo channel format: the sparse regularization term was adaptively tuned using a hierarchical Bayesian approach to yield the desired sparse decomposition. A modified Gaussian prior was formulated to express the correlation between different basis vectors. This algorithm automatically detected the optimal number of latent components of each individual source. 2) A factorization for single-channel BSS that decomposes an information-bearing matrix into a set of factor matrices representing the spectral dictionary and temporal codes: a variational Bayesian approach was developed to compute the sparsity parameters that optimize the matrix factorization. This approach combines the advantages of complex matrix factorization (CMF) and variational sparse analysis. 3) An imitated-stereo mixture model developed by weighting and time-shifting the original single-channel mixture, where the source signals are modelled by AR processes. The proposed mixture is analogous to a stereo signal created by two microphones, one real and one virtual. The imitated-stereo model employed nonnegative tensor factorization to separate the observed mixture, and the separability analysis of the imitated-stereo mixture was derived using Wiener masking.
All algorithms were tested with real audio signals. Source-separation performance was assessed by measuring the distortion between each original source and its estimate according to the signal-to-distortion ratio (SDR). The experimental results demonstrate that the proposed uninformed audio separation algorithms surpass the conventional BSS methods, i.e., the IS-cNTF, SNMF and CMF methods, with average SDR improvements ranging from 2.6 dB to 6.4 dB per source. Payap University
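The first approach rests on sparse NMF under the Itakura-Saito divergence. As a minimal sketch of that building block (not the thesis's full Bayesian tensor model: the sparsity weight `lam` is fixed here, whereas the thesis tunes it adaptively with a hierarchical Bayesian prior), the standard multiplicative updates look like this:

```python
import numpy as np

def sparse_is_nmf(V, rank, n_iter=200, lam=0.1, eps=1e-9, seed=0):
    """Sparse NMF with Itakura-Saito divergence via multiplicative updates.

    V   : nonnegative power spectrogram (freq x time)
    lam : L1 sparsity weight on the activations H (fixed here for
          illustration; adaptively tuned in the thesis)
    """
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, rank)) + eps   # spectral dictionary
    H = rng.random((rank, T)) + eps   # temporal activations
    for _ in range(n_iter):
        WH = W @ H + eps
        # IS divergence is the beta-divergence with beta = 0
        W *= ((WH ** -2 * V) @ H.T) / ((WH ** -1) @ H.T + eps)
        WH = W @ H + eps
        H *= (W.T @ (WH ** -2 * V)) / (W.T @ (WH ** -1) + lam + eps)
    return W, H
```

Each factor stays nonnegative because the updates only multiply by nonnegative ratios; the `lam` term in the denominator shrinks small activations toward zero, which is what yields the sparse decomposition.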

    Automated Landslide-Risk Prediction Using Web GIS and Machine Learning Models

    Get PDF
    Spatial landslide-susceptibility prediction is one of the most challenging research areas, as it directly concerns the safety of inhabitants. A novel geographic information web (GIW) application is proposed for dynamically predicting landslide risk in Chiang Rai, Thailand. The automated GIW system coordinates machine learning technologies, web technologies, and application programming interfaces (APIs). A new bidirectional long short-term memory (Bi-LSTM) algorithm is presented to forecast landslides. The proposed approach consists of three major steps, the first of which is the construction of a landslide dataset using Quantum GIS (QGIS). The second step is to generate the landslide-risk model based on machine learning approaches. Finally, the automated landslide-risk visualization illustrates the likelihood of a landslide via Google Maps on the website. Four static factors are considered for landslide-risk prediction, namely land cover, soil properties, elevation, and slope, together with a single dynamic factor, precipitation. Data are collected to construct a geospatial landslide database comprising three historical landslide locations: Phu Chifa in Thoeng District, Ban Pha Duea in Mae Salong Nai, and Mai Salong Nok in Mae Fa Luang District, Chiang Rai, Thailand. Data collection uses QGIS software to interpolate contours, elevation, slope degree, and land cover from Google satellite images, aerial photographs, and site-survey photographs, while the physiography and rock type are surveyed on site by experts. State-of-the-art machine learning models have been trained, i.e., linear regression (LR), artificial neural network (ANN), LSTM, and Bi-LSTM. Ablation studies have been conducted to determine the optimal parameter settings for each model. An enhancement method based on two-stage classification has been presented to improve the landslide prediction of the LSTM and Bi-LSTM models.
The landslide-risk prediction performance of these models is subsequently evaluated on a real-time dataset, and Bi-LSTM with Random Forest (Bi-LSTM-RF) is shown to yield the best prediction performance. The Bi-LSTM-RF model improves the landslide-risk prediction performance over LR, ANN, LSTM, and Bi-LSTM in terms of the area under the receiver operating characteristic curve (AUC) by 0.42, 0.27, 0.46, and 0.47, respectively. Finally, an automated web GIS has been developed; it consists of software components including the trained models, a rainfall API, the Google API, and a geodatabase. All components are interfaced via JavaScript and Node.js tools.
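The models above are ranked by AUC. Since AUC equals the probability that a randomly chosen positive (landslide) site receives a higher risk score than a randomly chosen negative one, it can be computed directly from the scores without choosing a threshold. A minimal sketch of that evaluation metric (not the authors' code):

```python
import numpy as np

def auc_score(y_true, y_score):
    """AUC via the rank-sum (Mann-Whitney) identity:
    AUC = P(score of a random positive > score of a random negative),
    with tied scores counted as half."""
    y_true = np.asarray(y_true, dtype=bool)
    pos = np.asarray(y_score)[y_true]    # scores at landslide sites
    neg = np.asarray(y_score)[~y_true]   # scores at non-landslide sites
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

A perfectly separating model scores 1.0; a model that assigns every site the same risk scores 0.5, so the reported gaps of 0.27 to 0.47 on this scale are substantial.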

    Sound events separation and recognition using L1-sparse complex nonnegative matrix factorization and multi-class mean supervector support vector machine

    No full text
    This paper proposes a novel single-channel sound separation and event recognition method. First, the sound separation step is based on a complex nonnegative matrix factorization (CMF) with probabilistically optimal L1 sparsity, which decomposes an information-bearing matrix into a two-dimensional convolution of factor matrices representing the spectral basis and temporal code of the sources. The L1-sparse CMF method can extract the recurrent patterns of magnitude spectra that underlie the observed complex spectra, together with phase estimates of the constituent signals, enabling the features of the components to be extracted more efficiently. Second, the event recognition step is built on a multi-class mean supervector support vector machine (MS-SVM). The separated signal from the first step is segmented using a sliding window function, and features are then extracted from each block. The major features, namely zero-crossing rate, Mel-frequency cepstral coefficients, and short-time energy, are investigated to classify sound-event signals into the defined classes. A mean supervector is encoded from the obtained features. The recognition performance of the multi-class MS-SVM method has been examined with models built on the various features. The experimental results show the robustness and efficiency of the proposed method
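Two of the three block features named above, zero-crossing rate and short-time energy, are simple enough to sketch directly (MFCCs additionally need a mel filterbank and DCT, so they are omitted here). A minimal sliding-window extractor, with frame and hop sizes as assumed parameters not given in the abstract:

```python
import numpy as np

def frame_features(x, frame_len=1024, hop=512):
    """Per-frame zero-crossing rate and short-time energy of a 1-D signal."""
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    zcr = np.empty(n_frames)
    energy = np.empty(n_frames)
    for i in range(n_frames):
        frame = x[i * hop : i * hop + frame_len]
        # fraction of consecutive sample pairs whose sign changes
        zcr[i] = np.mean(np.abs(np.diff(np.signbit(frame).astype(int))))
        # mean squared amplitude of the frame
        energy[i] = np.mean(frame ** 2)
    return zcr, energy
```

ZCR is high for noise-like or high-frequency events and low for tonal ones, while short-time energy tracks loudness, which is why the pair is discriminative for sound-event classes.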

    Blind 2D signal direction for limited-sensor space using maximum likelihood estimation

    No full text
    Humans can automatically turn toward the origin of a sound heard through both ears. This capability is crucial in daily life, and a computer that can recognize the direction of a sound source is likewise useful for many types of applications. A blind 2-dimensional (2D) signal-direction method using two recording sensors is developed for a limited space between the sensors in a noise-free environment. The proposed 2D source-direction method is based on time-delay estimation using maximum likelihood estimation, forming a histogram of the power-weighted spectrum over the attenuation and time-delay indices. A histogram-boundary method is also proposed, which relates the boundary to the distance between the two microphones. In addition, the number of time-index bins was fine-tuned to determine the proper number of bins for the histogram. Given a narrow spacing, i.e., 3.2 centimeters, the proposed method can acceptably localize the sound-source position. In experimental tests on real audio sources, the proposed method demonstrated a higher level of directional performance compared with an existing method
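The core primitive here is estimating the inter-sensor time delay. As a hedged sketch of that step (GCC-PHAT, a standard maximum-likelihood-flavoured time-delay estimator, standing in for the paper's power-weighted histogram method), one can whiten the cross-spectrum and pick the correlation peak:

```python
import numpy as np

def gcc_phat_delay(x1, x2, max_lag):
    """Estimate the delay (in samples) of x2 relative to x1 using the
    generalized cross-correlation with phase transform (GCC-PHAT).
    A positive result means x2 lags x1."""
    n = len(x1) + len(x2)                     # zero-pad for linear correlation
    X1 = np.fft.rfft(x1, n)
    X2 = np.fft.rfft(x2, n)
    cross = X2 * np.conj(X1)
    cross /= np.abs(cross) + 1e-12            # PHAT: keep phase only
    cc = np.fft.irfft(cross, n)
    # rearrange so index 0 corresponds to lag -max_lag
    cc = np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))
    return int(np.argmax(cc)) - max_lag
```

With a 3.2 cm baseline and 44.1 kHz sampling, the physical delay spans only about ±4 samples, which is why the paper's careful choice of histogram bins over the delay index matters.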

    Sound Events Separation and Recognition using L1-Sparse Complex Nonnegative Matrix Factorization and Multi-Class Mean Supervector Support Vector Machine

    No full text
    This paper proposes a novel single-channel sound separation and event recognition method. First, the sound separation step is based on a complex nonnegative matrix factorization (CMF) with probabilistically optimal L1 sparsity, which decomposes an information-bearing matrix into a two-dimensional convolution of factor matrices representing the spectral basis and temporal code of the sources. The L1-sparse CMF method can extract the recurrent patterns of magnitude spectra that underlie the observed complex spectra, together with phase estimates of the constituent signals, enabling the features of the components to be extracted more efficiently. Second, the event recognition step is built on a multi-class mean supervector support vector machine (MS-SVM). The separated signal from the first step is segmented using a sliding window function, and features are then extracted from each block. The major features, namely zero-crossing rate, Mel-frequency cepstral coefficients, and short-time energy, are investigated to classify sound-event signals into the defined classes. A mean supervector is encoded from the obtained features. The recognition performance of the multi-class MS-SVM method has been examined with models built on the various features. The experimental results show the robustness and efficiency of the proposed method
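The recognition step pools the variable number of per-block feature vectors into one fixed-length "mean supervector" before the SVM. As a simplified stand-in for that encoding (true mean supervectors are usually stacked from MAP-adapted GMM means, which the abstract does not detail), one can stack per-dimension means and standard deviations:

```python
import numpy as np

def mean_supervector(block_features):
    """Pool (n_blocks, n_features) per-block features into a single
    fixed-length vector by stacking per-dimension mean and standard
    deviation. A simplified stand-in for the paper's mean supervector;
    the pooled vector is what the multi-class SVM consumes."""
    f = np.asarray(block_features, dtype=float)
    return np.concatenate([f.mean(axis=0), f.std(axis=0)])
```

Whatever the pooling, the point is that events of different durations map to vectors of one fixed length, so a standard SVM can be trained on them.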

    Non-Intrusive Fish Weight Estimation in Turbid Water Using Deep Learning and Regression Models

    Get PDF
    Underwater fish monitoring is one of the most challenging problems in feeding and harvesting fish efficiently while remaining environmentally friendly. The proposed 2D computer vision method is aimed at non-intrusively estimating the weight of Tilapia fish in turbid water environments. Additionally, the proposed method avoids the expense of stereo cameras and instead uses only a low-cost video camera to observe the underwater life through a single-channel recording. In-house curated Tilapia-image and Tilapia-file datasets covering various ages of Tilapia are used. The proposed method consists of a Tilapia detection step and a Tilapia weight-estimation step. A Mask Region-based Convolutional Neural Network model is first trained to detect the fish and extract its image dimensions (i.e., in image pixels). Second is the Tilapia weight-estimation step, wherein the proposed method estimates the depth of the fish in the tank and then converts the Tilapia's extracted image dimensions from pixels to centimeters. Subsequently, the Tilapia's weight is estimated by a trained model based on regression learning. Linear regression, random forest regression, and support vector regression have been developed to determine the best model for weight estimation. The experimental results demonstrate that the proposed method yields a mean absolute error of 42.54 g, an R2 of 0.70, and an average weight error of 30.30 (±23.09) grams in a turbid water environment, which shows the practicality of the proposed framework
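The two numerical pieces of the pipeline, the pixel-to-centimeter conversion and the size-to-weight regressor, can be sketched as follows. The pinhole conversion and the classic length-weight power law W = a·L^b fitted in log space are standard techniques standing in for the paper's regressors; the focal length and the coefficients a, b below are assumed illustrative values, not the paper's:

```python
import numpy as np

def pixels_to_cm(length_px, depth_cm, focal_px):
    """Pinhole-camera conversion of an image measurement to real size.
    focal_px is the camera focal length in pixels (an assumed, calibrated
    constant; the abstract does not give one)."""
    return length_px * depth_cm / focal_px

def fit_length_weight(lengths_cm, weights_g):
    """Fit W = a * L**b by linear least squares in log space, the simplest
    of the three regressors the paper compares."""
    X = np.column_stack([np.ones(len(lengths_cm)), np.log(lengths_cm)])
    coef, *_ = np.linalg.lstsq(X, np.log(weights_g), rcond=None)
    return np.exp(coef[0]), coef[1]   # a, b
```

Random-forest or support-vector regressors would replace the log-linear fit while keeping the same pixel-to-centimeter front end.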

    Efficient Noisy Sound-Event Mixture Classification Using Adaptive-Sparse Complex-Valued Matrix Factorization and OvsO SVM

    Get PDF
    This paper proposes a solution for classifying events from a single noisy mixture that consists of two major steps: sound-event separation and sound-event classification. The traditional complex nonnegative matrix factorization (CMF) is extended with an optimal adaptive L1 sparsity to decompose the noisy single-channel mixture. The proposed adaptive L1-sparsity CMF algorithm encodes the spectral patterns and estimates the phases of the original signals in the time-frequency representation; these features make the temporal decomposition process more efficient. A support vector machine (SVM) with a one-versus-one (OvsO) strategy is applied to a mean supervector to categorize each demixed sound into the matching sound-event class. The first step of the multi-class MS-SVM method is to segment the separated signal into blocks with a sliding window and then encode three features for each block. Mel-frequency cepstral coefficients, short-time energy, and short-time zero-crossing rate are learned over the multiple sound-event classes by the OvsO-based SVM. A mean supervector is encoded from the obtained features. The proposed method has been evaluated in both separation and classification scenarios using real-world single-channel recordings and compared with state-of-the-art separation methods. The experimental results confirm that the proposed method outperforms the state-of-the-art methods
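The one-versus-one strategy trains one binary SVM per pair of classes and decides by majority vote. A minimal sketch of that voting scheme, where `pairwise_decide` is a hypothetical callback standing in for a trained binary SVM on each class pair:

```python
import numpy as np
from itertools import combinations

def ovo_vote(classes, pairwise_decide, x):
    """One-versus-one multi-class decision: query one binary classifier
    per class pair and return the class with the most votes.
    pairwise_decide(a, b, x) must return the winning class of the pair
    (standing in for a binary SVM trained on classes a and b)."""
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):
        votes[pairwise_decide(a, b, x)] += 1
    return max(votes, key=votes.get)
```

For K classes this needs K(K-1)/2 binary machines but each is trained on only two classes' data, which is the usual trade-off against one-versus-rest.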